
Added support for Pub/Sub mode in MultiDbClient #3722


Open
wants to merge 10 commits into
base: vv-active-active-pipeline

Conversation

vladvildanov
Collaborator

Pull Request check-list

Please make sure to review and check all of these items:

  • Do tests and lints pass with this change?
  • Do the CI tests pass with this change (enable them first in your forked repo and wait for the GitHub Actions build to finish)?
  • Is the new or changed code fully tested?
  • Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
  • Is there an example added to the examples folder (if applicable)?

NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.

    ) -> "PubSubWorkerThread":
        def callback():
            return self._active_pubsub.run_in_thread(
                sleep_time, daemon=daemon, exception_handler=exception_handler, pubsub=pubsub
Collaborator

Why do we need to provide the pubsub reference here? If a failover has occurred, shouldn’t _active_pubsub already contain all the necessary information? By passing in this reference, don’t we risk holding onto a pubsub object from a failed instance and potentially trying to work with it?

Collaborator Author

We need a reference to the currently active pub/sub so that we can execute commands on it at any point in time (e.g. subscribe to more channels or unsubscribe) while MultiDbClient is in Pub/Sub mode. We also keep it so that, during failover, we can re-initialise a new pub/sub with the same configuration and re-subscribe to the same channels.

We always update the reference when a failover happens, so we do not risk keeping a stale reference.

https://github.com/redis/redis-py/pull/3722/files#diff-77af8b3b216196f2b0569279fe1c6d9db146e355eba207eeb84021ccf54c3c0eR54
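
To picture the failover path described above, here is a rough sketch of the idea only; names such as _on_failover, _subscribed_channels and _pubsub_kwargs are illustrative, not the PR's actual code:

    def _on_failover(self, new_client):
        # Illustrative sketch: rebuild the active pub/sub on the newly
        # promoted database and restore the previous subscriptions.
        old_channels = dict(self._subscribed_channels)  # channel -> handler
        self._active_pubsub = new_client.pubsub(**self._pubsub_kwargs)
        if old_channels:
            self._active_pubsub.subscribe(**old_channels)

Because the reference is swapped atomically with the failover, any later subscribe/unsubscribe call issued through the client lands on the new pub/sub rather than the failed one.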

Collaborator

Yes, and it handles the state of self._active_pubsub, on which we call run_in_thread. But a pubsub object is also passed in as an argument, and that is the one I was asking about.

r_multi_db.spublish('test-channel', data)
sleep(0.1)

Collaborator

What do we expect to have happened in the pubsub_thread? Would it be worth validating whether it completed successfully or failed?

Collaborator Author

In pubsub_thread we're validating that we receive the expected message from the subscribed channel:

https://github.com/redis/redis-py/pull/3722/files/bee15d943e3c381ad1b0e9cebe6061cbce973d16#diff-eccf1b0c83ba8b8e9d8dc7e837ba57e529ec23df09f3d6f43f354fdf5584fcf7R281

Do you think we need to verify something else, like a counter to prove that it was called more than zero times?
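
A counter like that would at least let the main thread confirm the handler actually ran. A minimal sketch, assuming hypothetical names (handler, received_count) alongside the test's existing r_multi_db, data and sleep:

    received_count = 0

    def handler(message):
        # Runs inside the PubSubWorkerThread; an assert here only raises
        # in that worker thread, not in the test's main thread.
        global received_count
        assert message["data"] == data
        received_count += 1

    # ... subscribe the handler, start the worker thread, then publish ...
    r_multi_db.spublish('test-channel', data)
    sleep(0.1)

    # Main-thread check that the handler fired at least once.
    assert received_count > 0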

Collaborator

Maybe a check of the thread object's results. The thread might have raised an exception due to this assert failing, but unless the thread object's results are checked, we might never see that something has failed.
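
One way to surface such failures with redis-py's run_in_thread is the exception_handler hook: collect anything a message handler raises and assert on it from the test's main thread. A sketch, reusing the test's pubsub, r_multi_db, data and sleep, with errors and on_exception as assumed names:

    errors = []

    def on_exception(exc, pubsub, thread):
        # Called by the PubSubWorkerThread when a message handler raises;
        # record the exception so the main thread can fail the test explicitly.
        errors.append(exc)
        thread.stop()

    pubsub_thread = pubsub.run_in_thread(
        sleep_time=0.1, daemon=True, exception_handler=on_exception
    )

    r_multi_db.spublish('test-channel', data)
    sleep(0.1)

    assert not errors, f"pub/sub handler failed: {errors}"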
